CS-621 Theory Gems
Authors
Abstract
Today, we will briefly discuss an important technique in probability theory: measure concentration. Roughly speaking, measure concentration refers to the phenomenon that certain functions of random variables are sharply concentrated around their expectation or median. The main example of interest here is the Johnson-Lindenstrauss (JL) lemma. The JL lemma is a powerful tool for dimensionality reduction in high-dimensional Euclidean spaces, and it is widely used to alleviate the curse of dimensionality in applications that deal with high-dimensional data.
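To make this concrete, here is a minimal sketch of JL-style dimensionality reduction via a random Gaussian projection (an illustration added here, not code from the notes); the target dimension k = O(log(n)/eps^2) and the 1/sqrt(k) scaling follow the standard statement of the lemma, and the constant 8 below is just one common choice.

```python
import numpy as np

def jl_project(X, eps=0.5, seed=0):
    """Project rows of X (n points in R^d) down to k = O(log(n)/eps^2)
    dimensions with a random Gaussian map, in the spirit of the JL lemma."""
    n, d = X.shape
    k = int(np.ceil(8 * np.log(n) / eps**2))  # constant 8: one common choice
    rng = np.random.default_rng(seed)
    # Entries are N(0, 1/k); this scaling preserves squared norms in expectation.
    R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, d))
    return X @ R.T

# Usage: pairwise distances are preserved up to a (1 +/- eps) factor,
# with high probability over the choice of R.
X = np.random.default_rng(1).normal(size=(100, 10_000))
Y = jl_project(X, eps=0.5)
print(X.shape, "->", Y.shape)
```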
Similar resources
CS-621 Theory Gems
That is, we want the hyperplane corresponding to (w, θ) to separate the positive examples from the negative ones. As we already argued previously, wlog we can constrain ourselves to the case when θ = 0 (i.e., the hyperplane passes through the origin) and there are only positive examples (i.e., l_j = 1 for all j). Last time we presented a simple algorithm for this problem, called the Perceptron algori...
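For reference, a minimal sketch of the setting described in this snippet, using the classical Perceptron update through the origin (w ← w + l·x on a misclassified example); this is the textbook rule and not necessarily the exact presentation from the lecture.

```python
import numpy as np

def perceptron(X, labels, max_passes=100):
    """Classical Perceptron through the origin (theta = 0): update
    w <- w + l*x whenever example x with label l in {-1,+1} is misclassified."""
    w = np.zeros(X.shape[1])
    for _ in range(max_passes):
        mistakes = 0
        for x, l in zip(X, labels):
            if l * (w @ x) <= 0:   # misclassified (or on the hyperplane)
                w += l * x
                mistakes += 1
        if mistakes == 0:          # all examples separated: done
            return w
    return w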
CS-621 Theory Gems
In Lecture 10, we introduced a fundamental object of spectral graph theory: the graph Laplacian, and established some of its basic properties. We then focused on the task of estimating the eigenvalues of Laplacians. In particular, we proved the Courant-Fischer theorem, which is instrumental in obtaining upper-bounding estimates on eigenvalues. Today, we continue by showing a technique – s...
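For context, this is the standard Courant-Fischer (min-max) characterization being referred to, stated for the Laplacian L with eigenvalues λ_1 ≤ ... ≤ λ_n (the notes may use a different normalization or ordering):

```latex
\lambda_k \;=\; \min_{\substack{S \subseteq \mathbb{R}^n \\ \dim S = k}} \;
\max_{\substack{x \in S \\ x \neq 0}} \frac{x^{\top} L x}{x^{\top} x}
\;=\; \max_{\substack{T \subseteq \mathbb{R}^n \\ \dim T = n-k+1}} \;
\min_{\substack{x \in T \\ x \neq 0}} \frac{x^{\top} L x}{x^{\top} x}.
```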
CS-621 Theory Gems
As we mentioned in the last lecture, one of the core components of our algorithm for the distinct elements problem is a space-efficient construction of 2-wise independent functions. Formally, a k-wise independent hash function f : [m] → [T] is a randomized function that provides the guarantee that, for any k distinct elements j_1, ..., j_k ∈ [m] and any k possible values t_1, ..., t_k ∈ [T], the...
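To make the definition concrete, here is a sketch of the classical 2-wise independent family f(x) = ((a·x + b) mod p) mod T over a prime field; whether the lecture uses exactly this family is an assumption, but it is the textbook construction and needs only O(log p) bits of space.

```python
import random

def make_pairwise_hash(m, T, p=2_147_483_647):  # p: a prime >= m (here 2^31 - 1)
    """Sample f from the family f(x) = ((a*x + b) mod p) mod T.
    The map x -> (a*x + b) mod p is exactly pairwise independent on [p];
    reducing mod T makes the values only approximately uniform on [T]."""
    assert m <= p
    a = random.randrange(1, p)
    b = random.randrange(0, p)
    return lambda x: ((a * x + b) % p) % T

# Usage: only a and b need to be stored, i.e., O(log p) bits of space.
f = make_pairwise_hash(m=10**6, T=1024)
print(f(42), f(43))
```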
CS-621 Theory Gems: 1 Learning Non-linear Classifiers
In the previous lectures, we have focused on finding linear classifiers, i.e., ones in which the decision boundary is a hyperplane. However, in many scenarios the data points cannot really be classified in this manner, as there simply might be no hyperplane that separates most of the positive examples from the negative ones; see, e.g., Figure 1(a). Clearly, in such situations one needs to resor...
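The snippet is cut off, but the standard remedy in this setting is a non-linear feature map: lift the points into a higher-dimensional space so that a hyperplane there corresponds to a non-linear boundary in the original space. A minimal sketch follows; the specific map phi below is illustrative and not taken from the notes.

```python
import numpy as np

def phi(x):
    """Illustrative feature map R^2 -> R^3: points arranged in concentric
    circles, not linearly separable in the plane, become separable by a
    hyperplane in the image (the third coordinate is the squared radius)."""
    x1, x2 = x
    return np.array([x1, x2, x1**2 + x2**2])

# A point inside the unit circle vs. one outside: after the lift, the
# third coordinate alone separates them linearly.
print(phi((0.1, 0.2)), phi((2.0, 1.0)))
```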
CS-621 Theory Gems: 2 The Stock Market Model
The main topic of this lecture is the learning-from-expert-advice framework. Our goal here is to predict, as accurately as possible, a sequence of events in a situation where our only information about the future comes from the recommendations of a set of “experts”. The key feature (and difficulty) of this scenario is that most – if not all – of these experts (and thus their recommend...
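As a reference point, here is a sketch of the classical weighted-majority rule for this framework (whether the lecture develops exactly this algorithm is an assumption): every expert starts with weight 1, we predict with the weighted majority vote, and every expert that errs has its weight multiplied by (1 - eps).

```python
def weighted_majority(advice_rounds, outcomes, eps=0.5):
    """Classical weighted majority over binary (0/1) expert advice.
    advice_rounds[t][i] is expert i's prediction in round t."""
    n = len(advice_rounds[0])
    w = [1.0] * n
    mistakes = 0
    for advice, outcome in zip(advice_rounds, outcomes):
        vote_1 = sum(wi for wi, a in zip(w, advice) if a == 1)
        vote_0 = sum(wi for wi, a in zip(w, advice) if a == 0)
        pred = 1 if vote_1 >= vote_0 else 0
        mistakes += (pred != outcome)
        # Penalize every expert that was wrong this round.
        w = [wi * (1 - eps) if a != outcome else wi
             for wi, a in zip(w, advice)]
    return mistakes

# Usage: 3 experts, 2 rounds of binary predictions.
print(weighted_majority([[1, 0, 1], [0, 0, 1]], [1, 0]))
```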